MPI topologies and space discretisation

Topologies

CartesianTopology objects are used to describe both the MPI grid layout and the space discretisations (global and local to each MPI process).

Each CartesianTopology has a “mesh” attribute, an object of type Mesh, which describes the grid on which the current MPI process works.

See Domains for how to build a space discretization.

Example of grids

With 2 MPI processes and a periodic domain:

global grid (node number):        0 1 2 3 4 5 6 7 8

proc 0 (global indices):      X X 0 1 2 3 X X
       (local indices):       0 1 2 3 4 5 6 7
proc 1 (global indices):              X X 4 5 6 7 X X
       (local indices):               0 1 2 3 4 5 6 7

with 'X' for ghost points.
  • Node ‘8’ of the global grid is not represented on the local meshes because of periodicity: node 8 coincides with node 0 (N8 == N0).

  • on proc 1, we have:

    • local resolution = 8

    • global_start = 4

    • ‘computation nodes’ = local indices 2, 3, 4, 5

    • ‘ghost nodes’ = local indices 0, 1, 6, 7

Remarks:

  • all ‘global’ values refer to indices in the global discretization; for example, ‘global_start’ = 4 on proc 1 means that its first computation node is the point of index 4 in the global resolution.

  • only periodic grids are considered
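
The layout above can be reproduced with a few lines of plain Python. This is only a sketch, independent of the HySoP API; the variable names simply mirror the quantities listed in the example:

# Plain-Python sketch of the example above (not the HySoP API): a periodic
# 1D grid of 8 nodes split between 2 processes, with 2 ghost points on each
# side of each local sub-grid.
global_resolution = 8      # node 8 is never stored: it coincides with node 0
n_procs = 2
ghosts = 2

for rank in range(n_procs):
    n_compute = global_resolution // n_procs        # 4 computation nodes
    global_start = rank * n_compute                 # first owned global index
    local_resolution = n_compute + 2 * ghosts       # 8, ghosts included
    compute_nodes = list(range(ghosts, ghosts + n_compute))  # local indices 2..5
    ghost_nodes = [i for i in range(local_resolution) if i not in compute_nodes]
    # global index seen by each local node, wrapped because of periodicity
    global_indices = [(global_start - ghosts + i) % global_resolution
                      for i in range(local_resolution)]
    print(rank, global_start, compute_nodes, ghost_nodes, global_indices)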

Data transfers between topologies

Important remark: data redistribution between topologies is automatically inferred when the Problem is built (see :ref:`problems`).

A continuous field may be associated with different topologies (in the sense of different space discretizations and/or MPI process distributions), depending on the operators in which this field is involved. Consider for example a problem with stretching and Poisson operators, in three dimensions, with the following sequence:

  • vorticity = stretching(vorticity, velocity), with a topology ‘topo_s’

  • velocity = poisson(vorticity), with a topology ‘topo_p’

For the first operator (stretching), the best data distribution will be a 3D MPI process grid, with a space discretization including ghost points (to fit the requirements of finite difference schemes), while if Poisson is solved with FFT, a 1D or 2D MPI process grid is required and ghost points are not necessary. Therefore, fields present in both operators will be associated with two different topologies. The vorticity output from stretching is used as input to Poisson. Since data distribution and local meshes differ between stretching and Poisson, vorticity data need to be redistributed from topo_s to topo_p. This is the role of the Redistribute operator.

The correct sequence to solve the problem will be:

  • vorticity = stretching(vorticity, velocity), with a topology ‘topo_s’

  • redistribute(vorticity, topo_s, topo_p)

  • velocity = poisson(vorticity), with a topology ‘topo_p’

The second step means ‘update the values of vorticity on topo_p with its values on topo_s, for all MPI processes involved in topo_s and topo_p’.
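
In the pseudo-code style used at the end of this section, this sequence could look as follows (a sketch only: the constructor arguments and the apply() calls are assumptions, not the exact HySoP signatures):

# Sketch of the stretching / redistribute / Poisson sequence; arguments and
# method names are illustrative, not the exact HySoP API.
stretching = Stretching(vorticity, velocity)             # discretized on topo_s
poisson = Poisson(velocity, vorticity)                   # discretized on topo_p
redis = Redistribute(stretching, poisson, [vorticity])   # topo_s -> topo_p

# execution order for one time step:
stretching.apply()
redis.apply()      # update vorticity on topo_p from its values on topo_s
poisson.apply()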

Two kinds of redistribution are available in HySoP:

  • RedistributeIntra : for topologies/operators defined on the same MPI communicator

  • RedistributeInter : for topologies/operators defined on two different communicators (with no intersection between their sets of MPI processes)

In addition to the standard operator arguments (see Operators in HySoP, basic ideas), a redistribute operator needs a ‘source_topos’ and a ‘target_topo’. Source and target may be either a CartesianTopology or a dictionary with fields as keys and topologies as values.
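
For illustration, the two accepted forms could look as follows, in the pseudo-code style of this page (the exact constructor signature is an assumption, not the verified HySoP API):

# Hedged sketch of the two source/target forms; the calls mirror the
# pseudo-code style of this page, not the exact HySoP signature.

# either a single CartesianTopology shared by all redistributed fields ...
redis = RedistributeIntra([vorticity], topo_s, topo_p)

# ... or dictionaries with fields as keys and topologies as values
source = {vorticity: topo_s, velocity: topo_s}
target = {vorticity: topo_p, velocity: topo_p}
redis = RedistributeIntra([vorticity, velocity], source, target)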

Many different topologies can coexist within a single problem. Indeed, each operator is defined on a specific topology, which means that its fields are distributed among MPI processes and defined on local grids.

Thus, when an operator provides its output field as an input to the next operator, it may be necessary to redistribute that field to fit the topology of the next operator. Redistribute operators are made for that purpose.

Example:

op1 = Poisson(vorticity, ...)
op2 = Stretching(vorticity, velocity)

We consider a global resolution of size \(N^3\) and 8 MPI processes.

For op1, the domain is cut into planes and data are distributed along a single direction. Local grids will be of size \(N\times N\times\frac{N}{8}\).

For op2, a 3D topology is at work and data are distributed along all directions. Local grids will be of size \(\left(\frac{N}{2}\right)^3\).

So, on each MPI process, vorticity data must be redistributed from an \(N\times N\times\frac{N}{8}\) grid to a \(\left(\frac{N}{2}\right)^3\) grid, which implies local copies and MPI communications:

op1 = Poisson(vorticity, ...)
redis = Redistribute(op1, op2, [vorticity])
op2 = Stretching(vorticity, velocity)
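
For illustration, the process grid shapes behind these local sizes can be checked with mpi4py. This sketch is independent of HySoP, and N = 256 is an arbitrary choice:

# Sketch (mpi4py, not the HySoP API) of the two process grids behind the
# local sizes quoted above, for N = 256 and 8 MPI processes.
from mpi4py import MPI

N, nprocs = 256, 8

# op1 (Poisson solved with FFT): processes along a single direction -> planes
dims_op1 = MPI.Compute_dims(nprocs, [1, 1, 0])   # -> [1, 1, 8]
local_op1 = [N // d for d in dims_op1]           # -> [256, 256, 32] = N x N x N/8

# op2 (stretching): full 3D process grid
dims_op2 = MPI.Compute_dims(nprocs, 3)           # -> [2, 2, 2]
local_op2 = [N // d for d in dims_op2]           # -> [128, 128, 128] = (N/2)^3

print(dims_op1, local_op1)
print(dims_op2, local_op2)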